Are We Thinking about AI Wrong?

#artificialintelligence

Editor's note: This episode is part of our podcast series on emerging problems in data science and machine learning, hosted by Jeremie Harris. Apart from hosting the podcast, Jeremie helps run a data science mentorship startup called SharpestMinds. AI research is often framed as a human-versus-machine rivalry that will inevitably lead to the defeat, and even wholesale replacement, of human beings by artificial superintelligences with their own sense of agency and their own goals. Divya Siddarth disagrees with this framing. Instead, she argues, this perspective leads us to focus on applications of AI that are neither as profitable as they could be nor safe enough to prevent potentially catastrophic consequences from dangerous AI systems in the long run.


How cybersecurity is getting AI wrong

#artificialintelligence

The cybersecurity industry is rapidly embracing the notion of "zero trust", where architectures, policies, and processes are guided by the principle that no one and nothing should be trusted. In the same breath, however, the industry is adopting a growing number of AI-driven security solutions that rely on some form of trusted "ground truth" as a reference point. This is not a hypothetical discussion. Organizations are introducing AI models into security practices that touch almost every aspect of their business, and one of the most urgent questions is whether regulators, compliance officers, security professionals, and employees will be able to trust these security models at all. Because AI models are sophisticated, opaque, automated, and often evolving, it is difficult to establish trust in an AI-dominant environment. Yet without trust and accountability, some of these models may be deemed too risky and could eventually be underutilized, marginalized, or banned altogether.


Andrew Ng thinks your company is doing AI wrong

#artificialintelligence

Andrew Ng knows a thing or two about artificial intelligence. A former head of Google Brain and former chief scientist at Baidu, Ng also co-founded Coursera and regularly teaches popular courses on the technology online and at Stanford. He also runs Landing AI, which provides manufacturers (and soon, other industries) with an AI platform that helps developers build and deploy computer vision models more easily. That experience has given Ng a deep understanding of the benefits AI can produce, as well as the limitations of the technology. As Ng expands his work beyond consumer internet companies, he is seeing a pattern: organizations are setting their AI ambitions too high.